
    Zero-Shot Deep Domain Adaptation

    Full text link
    Domain adaptation is an important tool to transfer knowledge about a task (e.g. classification) learned in a source domain to a second, or target, domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such task-relevant target-domain data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation which is not only tailored for the task of interest but also close to the target-domain representation. Therefore, the source-domain solution for the task of interest (e.g. a classifier for classification tasks), which is jointly trained with the source-domain representation, is applicable to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method which requires no task-relevant target-domain data. The underlying principle is not particular to computer vision data, but should be extensible to other domains.
    Comment: This paper is accepted to the European Conference on Computer Vision (ECCV), 201
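    The two objectives described above (a source representation that is both task-tailored and close to the target representation) can be sketched as a combined loss. This is an illustrative sketch only; the function names, vector shapes, and weighting are assumptions, not the authors' implementation.

```python
# Illustrative sketch (hypothetical shapes and weights), not the authors' code:
# ZDDA-style training balances a task loss on the source representation with an
# alignment loss pulling paired dual-domain representations together.
def squared_l2(a, b):
    """Squared Euclidean distance between two representation vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def combined_loss(task_loss, src_repr, tgt_repr, align_weight=1.0):
    """Task loss plus a weighted alignment term between paired representations."""
    return task_loss + align_weight * squared_l2(src_repr, tgt_repr)

# A toy pair of representations of the same task-irrelevant input.
loss = combined_loss(0.5, [1.0, 2.0], [1.0, 3.0])
```

    Minimizing the alignment term is what lets a classifier trained on the source representation transfer to the target representation.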

    Object Contour and Edge Detection with RefineContourNet

    Full text link
    A ResNet-based multi-path refinement CNN is used for object contour detection. For this task, we prioritise the effective utilization of the high-level abstraction capability of a ResNet, which leads to state-of-the-art results for edge detection. With this focus, we fuse the high-, mid- and low-level features in that specific order, which differs from many other approaches. The network uses the tensor with the highest-level features as the starting point and combines it layer-by-layer with features of a lower abstraction level until it reaches the lowest level. We train this network on a modified PASCAL VOC 2012 dataset for object contour detection and evaluate it on a refined PASCAL-val dataset, reaching excellent performance and an Optimal Dataset Scale (ODS) of 0.752. Furthermore, by fine-tuning on the BSDS500 dataset we reach state-of-the-art results for edge detection with an ODS of 0.824.
    Comment: Keywords: Object Contour Detection, Edge Detection, Multi-Path Refinement CN
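    The high-to-low fusion order described above can be illustrated with a toy top-down refinement loop. The 1-D feature rows and nearest-neighbour upsampling below are simplifying assumptions for illustration, not the actual network.

```python
# Toy sketch of top-down multi-path refinement (illustrative, not the actual
# RefineContourNet architecture): start from the highest-level feature map and
# fuse progressively lower-level features until the lowest level is reached.
def upsample2x(row):
    """Nearest-neighbour 2x upsampling of a 1-D feature row."""
    return [v for v in row for _ in (0, 1)]

def refine_top_down(high_to_low):
    """Fuse feature rows ordered from highest to lowest abstraction level."""
    fused = high_to_low[0]
    for lower in high_to_low[1:]:
        # Upsample the coarser fused features, then add the finer level.
        fused = [a + b for a, b in zip(upsample2x(fused), lower)]
    return fused

# Three levels: coarse semantic features first, fine detail features last.
features = [[1.0], [0.5, 0.5], [0.1, 0.2, 0.3, 0.4]]
fused = refine_top_down(features)
```

    The point of the ordering is that semantic (high-level) evidence guides the fusion before fine spatial detail is reintroduced.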

    A note using mergers and acquisitions to gain competitive advantage in the United States in the case of Latin American MNCs

    Get PDF
    Author's Original
    The "new" economic and business climate in Latin America, fostered by multilateral trade agreements such as NAFTA, MERCOSUR, and the ANDEAN Pact, suggests that Latin American (LA) firms must become more aggressive and competitive in order to survive. Foreign direct investment in the form of mergers and acquisitions (M&A) is often an effective way of competing in a tough global environment. Using transactions data collected from Security Data Company's Worldwide Merger and Acquisition database, this paper analyzes the relative involvement of firms from five LA countries (Argentina, Brazil, Chile, Mexico, and Venezuela) in acquiring targets in the United States of America. Transaction characteristics examined and summarized include the annual distribution (1985-1998) of the deals, the industrial sector of the target firm, the form of acquisition method used, and the form of ownership of the target firm. The trends are analyzed, and implications for managers are indicated.
    Milman, C. D., D’Mello, J. P., Aybar, B., & Arbaláez, H. (2001). A note using mergers and acquisitions to gain competitive advantage in the United States in the case of Latin American MNCs. International Review of Financial Analysis, 10(3), 323-332. doi:10.1016/S1057-5219(01)00056-

    Runtime Distributions and Criteria for Restarts

    Full text link
    Randomized algorithms sometimes employ a restart strategy. After a certain number of steps, the current computation is aborted and restarted with a new, independent random seed. In some cases, this results in an improved overall expected runtime. This work introduces properties of the underlying runtime distribution which determine whether restarts are advantageous. The most commonly used probability distributions admit the use of a scale and a location parameter. Location parameters shift the density function to the right, while scale parameters affect the spread of the distribution. It is shown that for all distributions scale parameters do not influence the usefulness of restarts and that location parameters have only a limited influence. This result simplifies the analysis of the usefulness of restarts. The most important runtime probability distributions are the log-normal, the Weibull, and the Pareto distribution. In this work, these distributions are analyzed for the usefulness of restarts. Second, a condition for the optimal restart time (if it exists) is provided. The log-normal, the Weibull, and the generalized Pareto distribution are analyzed in this respect. Moreover, it is shown that the optimal restart time is likewise not influenced by scale parameters and that the influence of location parameters is only linear.
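    The effect described above can be checked empirically with a small simulation. The log-normal parameters and the restart cutoff below are arbitrary choices for illustration; for a heavy-tailed runtime distribution such as this one, restarting is expected to reduce the mean runtime.

```python
import random

# Small Monte-Carlo simulation (illustrative parameters): for heavy-tailed
# runtime distributions such as a log-normal with a large shape parameter,
# aborting and restarting after a fixed cutoff can cut the expected runtime.
def expected_runtime(sample, restart_after=None, trials=20000, seed=0):
    """Estimate the expected total runtime with an optional restart cutoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        while True:
            t = sample(rng)
            if restart_after is None or t <= restart_after:
                total += t  # the run finishes before the cutoff
                break
            total += restart_after  # abort at the cutoff and restart
    return total / trials

heavy_tailed = lambda rng: rng.lognormvariate(0.0, 2.0)
without = expected_runtime(heavy_tailed)
with_restarts = expected_runtime(heavy_tailed, restart_after=2.0)
```

    With these parameters the untruncated mean is e^2 ≈ 7.4, while restarting at t = 2 brings the expected total runtime down considerably, matching the qualitative claim above.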

    Anatomically Constrained Video-CT Registration via the V-IMLOP Algorithm

    Full text link
    Functional endoscopic sinus surgery (FESS) is a surgical procedure used to treat acute cases of sinusitis and other sinus diseases. FESS is fast becoming the preferred choice of treatment due to its minimally invasive nature. However, due to the limited field of view of the endoscope, surgeons rely on navigation systems to guide them within the nasal cavity. State-of-the-art navigation systems report registration errors of over 1 mm, which is large compared to the size of the nasal airways. We present an anatomically constrained video-CT registration algorithm that incorporates multiple video features. Our algorithm is robust in the presence of outliers. We also test our algorithm on simulated and in-vivo data, and test its accuracy against degrading initializations.
    Comment: 8 pages, 4 figures, MICCA

    Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: The Benefit of Target Expectation Maximization

    Full text link
    © 2018, Springer Nature Switzerland AG.
    In this paper, we make two contributions to unsupervised domain adaptation (UDA) using the convolutional neural network (CNN). First, our approach transfers knowledge in all the convolutional layers through attention alignment. Most previous methods align high-level representations, e.g., activations of the fully connected (FC) layers. In these methods, however, the convolutional layers which underpin critical low-level domain knowledge cannot be updated directly towards reducing domain discrepancy. Specifically, we assume that the discriminative regions in an image are relatively invariant to image style changes. Based on this assumption, we propose an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain. Second, we estimate the posterior label distribution of the unlabeled data for target network training. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are vulnerable to label estimation errors. Instead, our approach uses the estimated category distribution to calculate the cross-entropy loss for training, thereby ameliorating the error accumulation of the estimated labels. The two contributions allow our approach to outperform the state-of-the-art methods by +2.6% on the Office-31 dataset.
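    The second contribution above, training against an estimated category distribution rather than a hard pseudo label, can be sketched with a soft cross-entropy. This is a minimal sketch, not the authors' implementation; the logits and posteriors are made-up toy values.

```python
import math

# Minimal sketch (not the authors' implementation): computing cross-entropy
# against a soft posterior category distribution instead of a hard pseudo
# label, which softens the impact of label-estimation errors.
def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def soft_cross_entropy(logits, posterior):
    """Cross-entropy of the model's prediction against an estimated posterior."""
    probs = softmax(logits)
    return -sum(p * math.log(q) for p, q in zip(posterior, probs))

logits = [2.0, 0.5, -1.0]
hard = soft_cross_entropy(logits, [1.0, 0.0, 0.0])  # one-hot pseudo label
soft = soft_cross_entropy(logits, [0.7, 0.2, 0.1])  # estimated distribution
```

    A hard pseudo label is the special case of a one-hot posterior; the soft posterior spreads the loss over categories, so a single wrong argmax does not dominate training.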

    Color Image Segmentation by Voronoi Partitions

    Get PDF
    We address the issue of low-level segmentation of color images. The proposed approach is based on the formulation of the problem as a generalized Voronoi partition of the image domain. In this context, a segmentation is determined by the definition of a distance between points of the image and the selection of a set of sites. The distance is defined by considering the low-level attributes of the image and, particularly, the color information. We divide the segmentation task into three successive sub-tasks, treated in the framework of Voronoi partitions: pre-segmentation, hierarchical representation, and contour extraction.
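    A generalized Voronoi partition with a colour-aware distance can be sketched as follows. The quadratic distance, the weights, and the site placement are hypothetical illustrations, not the paper's actual definitions.

```python
# Minimal sketch (hypothetical distance, not the paper's definition): label each
# pixel of a tiny grayscale image with its nearest site under a generalized
# distance that mixes spatial distance and intensity difference.
def voronoi_segment(image, sites, alpha=1.0, beta=4.0):
    """Generalized Voronoi labeling.

    image: 2-D list of intensities in [0, 1]; sites: list of (row, col).
    Distance d(p, s) = alpha * ||p - s||^2 + beta * (I(p) - I(s))^2.
    """
    labels = []
    for r, row in enumerate(image):
        out = []
        for c, val in enumerate(row):
            def dist(s):
                sr, sc = s
                spatial = (r - sr) ** 2 + (c - sc) ** 2
                color = (val - image[sr][sc]) ** 2
                return alpha * spatial + beta * color
            out.append(min(range(len(sites)), key=lambda i: dist(sites[i])))
        labels.append(out)
    return labels

# Two flat intensity regions, with one site placed inside each region.
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
labels = voronoi_segment(img, sites=[(1, 0), (1, 3)])
```

    With the colour term weighted more heavily than the spatial term, the partition boundary snaps to the intensity edge between the two regions rather than to the spatial midpoint between sites.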

    Climbing: A Unified Approach for Global Constraints on Hierarchical Segmentation

    Get PDF
    The paper deals with global constraints for hierarchical segmentations. The proposed framework associates, with an input image, a hierarchy of segmentations and an energy, and studies the subsequent optimization problem. It is the first paper that compiles the different global constraints and unifies them as Climbing energies. The transition from global optimization to local optimization is attained by the h-increasingness property, which allows comparing parent and child partition energies in hierarchies. The laws of composition of such energies are established, and examples are given over the Berkeley Dataset for colour and texture segmentation.

    Resolution-Independent Meshes of Superpixels

    Get PDF
    Over-segmentation into superpixels is an important preprocessing step to smartly compress the input size and speed up higher-level tasks. A superpixel was traditionally considered as a small cluster of square-based pixels that have similar color intensities and are closely located to each other. In this discrete model the boundaries of superpixels often have irregular zigzags consisting of horizontal or vertical edges from a given pixel grid. However, digital images represent a continuous world, hence the following continuous model in the resolution-independent formulation can be more suitable for the reconstruction problem. Instead of uniting squares in a grid, a resolution-independent superpixel is defined as a polygon that has straight edges with any possible slope at subpixel resolution. The harder continuous version of the over-segmentation problem is to split an image into polygons and find a best (say, constant) color for each polygon so that the resulting colored mesh well approximates the given image. Such a mesh of polygons can be rendered at any higher resolution with all edges kept straight. We propose a fast conversion of any traditional superpixels into polygons and guarantee that their straight edges do not intersect. The meshes based on the superpixels SEEDS (Superpixels Extracted via Energy-Driven Sampling) and SLIC (Simple Linear Iterative Clustering) are compared with past meshes based on the Line Segment Detector. The experiments on the Berkeley Segmentation Database confirm that the new superpixels have more compact shapes than pixel-based superpixels.

    Estimation of the Prevalence of Undiagnosed and Diagnosed HIV in an Urban Emergency Department

    Get PDF
    To estimate the prevalence of undiagnosed HIV, the prevalence of diagnosed HIV, and the proportion of HIV that is undiagnosed in populations with similar demographics as the Universal Screening for HIV in the Emergency Room (USHER) Trial and the Brigham and Women's Hospital (BWH) Emergency Department (ED) in Boston, MA. We also sought to estimate these quantities within demographic and risk behavior subgroups.
    We used data from the USHER Trial, which was a randomized clinical trial of HIV screening conducted in the BWH ED. Since eligible participants were HIV-free at time of enrollment, we were able to calculate the prevalence of undiagnosed HIV. We used data from the Massachusetts Department of Public Health (MA/DPH) to estimate the prevalence of diagnosed HIV, since the MA/DPH records the number of persons within MA who are HIV-positive. We calculated the proportion of HIV that is undiagnosed using these estimates of the prevalence of undiagnosed and diagnosed HIV. Estimates were stratified by age, sex, race/ethnicity, history of testing, and risk behaviors.
    The overall expected prevalence of diagnosed HIV in a population similar to those presenting to the BWH ED was 0.71% (95% CI: 0.63%, 0.78%). The prevalence of undiagnosed HIV was estimated at 0.22% (95% CI: 0.10%, 0.42%), and the resultant overall prevalence was 0.93%. The proportion of HIV infection that is undiagnosed in this ED-based setting was estimated to be 23.7% (95% CI: 11.6%, 34.9%) of total HIV infections.
    Despite different methodology, our estimate of the proportion of HIV that is undiagnosed in an ED setting was similar to previous estimates based on national surveillance data. Universal routine testing programs in EDs should use these data to help plan their yield of HIV detection.
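    The headline figures above follow directly from the two prevalence point estimates; the short check below just reproduces that arithmetic.

```python
# Reproduce the reported arithmetic from the two prevalence point estimates.
diagnosed = 0.0071    # 0.71% expected prevalence of diagnosed HIV
undiagnosed = 0.0022  # 0.22% estimated prevalence of undiagnosed HIV

overall = diagnosed + undiagnosed          # overall prevalence
share_undiagnosed = undiagnosed / overall  # fraction of infections undiagnosed

print(round(overall * 100, 2), round(share_undiagnosed * 100, 1))
# Prints the 0.93% overall prevalence and the 23.7% undiagnosed share.
```

    Note the confidence intervals reported above come from the study's own methodology, not from this point-estimate arithmetic.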